    Embodied cognition through cultural interaction

    In this short paper we describe a robotic setup to study the self-organization of conceptualisation and language. What distinguishes this project from others is that we envision a robot with specific cognitive capacities, but without resorting to any pre-programmed representations or conceptualisations. The key to all of this is self-organization and enculturation. We report preliminary results on learning motor behaviours through imitation, and sketch how language plays a pivotal role in constructing world representations.

    Coordinating perceptually grounded categories through language: A case study for colour

    This article proposes a number of models to examine through which mechanisms a population of autonomous agents could arrive at a repertoire of perceptually grounded categories that is sufficiently shared to allow successful communication. The models are inspired by the main approaches to human categorisation discussed in the literature: nativism, empiricism, and culturalism. Colour is taken as a case study. Although we take no stance on which position is to be accepted as final truth with respect to human categorisation and naming, we do point to theoretical constraints that make each position more or less likely, and we make clear suggestions on what the best engineering solution would be. Specifically, we argue that the collective choice of a shared repertoire must integrate multiple constraints, including constraints coming from communication. This research was sponsored by the Sony Computer Science Laboratory in Paris, the Flemish Fund for Scientific Research (FWO Vlaanderen), the OMLL initiative of the European Science Foundation, and the EU-FET ECAgents and Cogniron projects. Peer reviewed.
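    The kind of coordination mechanism discussed above can be illustrated with a minimal naming-game sketch. This is not one of the article's actual models; all agent names, words, and parameters below are illustrative, and colour is reduced to a single hue value for simplicity. Agents align their colour prototypes only through repeated pairwise communication, with hearers adapting after failed games.

```python
# A minimal sketch (not the article's actual models) of agents coordinating
# perceptually grounded colour categories through repeated naming games.
import random

random.seed(0)

class Agent:
    def __init__(self):
        # Each agent starts with its own random prototype per word.
        self.lexicon = {w: random.random() for w in ("wa", "bo", "ki")}

    def name(self, hue):
        # Pick the word whose prototype lies closest to the stimulus.
        return min(self.lexicon, key=lambda w: abs(self.lexicon[w] - hue))

    def adapt(self, word, hue, rate=0.2):
        # Shift the named prototype toward the observed stimulus.
        self.lexicon[word] += rate * (hue - self.lexicon[word])

def play_games(agents, rounds=2000):
    for _ in range(rounds):
        speaker, hearer = random.sample(agents, 2)
        hue = random.random()                # a random colour stimulus
        word = speaker.name(hue)
        if hearer.name(hue) != word:         # communication failed:
            hearer.adapt(word, hue)          # hearer aligns with speaker

def agreement(agents, samples=200):
    # Fraction of stimuli that all agents name identically.
    hits = 0
    for _ in range(samples):
        hue = random.random()
        hits += len({a.name(hue) for a in agents}) == 1
    return hits / samples

agents = [Agent() for _ in range(5)]
before = agreement(agents)
play_games(agents)
after = agreement(agents)
print(before, after)
```

Because adaptation is driven only by communicative failure, agreement tends to rise over repeated games without any central authority, which is the self-organising dynamic the culturalist models rely on.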

    Tactile interaction with a robot leads to increased risk-taking

    Tactile interaction plays a crucial role in interactions between people. Touch can, for example, help people calm down and lower physiological stress responses. Consequently, it is believed that tactile and haptic interaction also matter in human-robot interaction. We study whether the intensity of tactile interaction has an impact on people, and do so by studying whether different intensities of tactile interaction modulate physiological measures and task performance. We use a paradigm in which a small humanoid robot encourages risk-taking behaviour, relying on peer encouragement to take more risks, which might lead to a higher pay-off but potentially also to higher losses. For this, the Balloon Analogue Risk Task (BART) is used as a proxy for the propensity to take risks. We study four conditions: one control condition in which the task is completed without a robot, and three experimental conditions in which a robot is present that encourages risk-taking behaviour with different degrees of tactile interaction. The results show that both low-intensity and high-intensity tactile interaction increase people's risk-taking behaviour. However, low-intensity tactile interaction increases comfort and lowers stress, whereas high-intensity touch does not. Comment: 10 pages, 5 figures, conference.
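    The risk-reward trade-off that makes BART a useful proxy can be sketched in a few lines. This is a hedged illustration, not the study's actual task code; the pump value and burst point below are invented for the example.

```python
# Illustrative sketch of the Balloon Analogue Risk Task (BART) payoff logic.
# The cents-per-pump value and burst point are example parameters only.
def bart_trial(pumps, burst_point, cents_per_pump=5):
    """Return the payoff for one balloon.

    Each pump adds `cents_per_pump` to a temporary bank; pumping to or
    past `burst_point` pops the balloon and the trial pays nothing.
    """
    if pumps >= burst_point:
        return 0                      # balloon burst: all gains lost
    return pumps * cents_per_pump     # banked winnings

# More pumps mean a higher potential pay-off but also a higher risk:
assert bart_trial(pumps=10, burst_point=64) == 50
assert bart_trial(pumps=70, burst_point=64) == 0
```

A participant's average number of pumps thus directly indexes their propensity to take risks, which is what the robot's encouragement is hypothesised to shift.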

    Talking About Task Progress: Towards Integrating Task Planning and Dialog for Assistive Robotic Services

    The use of service robots to assist ageing people in their own homes has the potential to allow people to maintain their independence, increasing their health and quality of life. In many assistive applications, robots perform tasks on people's behalf that they are unable or unwilling to monitor directly. It is important that users be given useful and appropriate information about task progress. People being assisted in homes and other real-world environments are likely to be engaged in other activities while they wait for a service, so information should also be presented in an appropriate, non-intrusive manner. This paper presents a human-robot interaction experiment investigating what type of feedback people prefer in verbal updates by a service robot about distributed assistive services. People found feedback about time until task completion more useful than feedback about events in task progress or no feedback. We also discuss future research directions that involve giving non-expert users more input into the task planning process when delays or failures occur that necessitate replanning or modifying goals.

    Reinforcement learning and insight in the artificial pigeon

    The phenomenon of insight (also called "Aha!" or "Eureka!" moments) is considered a core component of creative cognition. It is also a puzzle and a challenge for statistics-based approaches to behavior such as associative learning and reinforcement learning. We simulate a classic experiment on insight in pigeons using deep reinforcement learning. We show that prior experience may produce large and rapid performance improvements reminiscent of insights, and we suggest theoretical connections between concepts from machine learning (such as the value function or overfitting) and concepts from psychology (such as feelings-of-warmth and the Einstellung effect). However, the simulated pigeons were slower than the real pigeons at solving the test problem, requiring a greater amount of trial and error: their "insightful" behavior was sudden by comparison with learning from scratch, but slow by comparison with real pigeons. This leaves open the question of whether incremental improvements to reinforcement learning algorithms will be sufficient to produce insightful behavior.
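    The core effect described above, prior experience yielding a sudden performance jump, can be sketched with plain tabular Q-learning rather than the paper's deep-RL simulation. Everything below (the corridor environment, learning rates, episode counts) is an illustrative assumption, not the authors' setup: an agent that inherits a learned value function solves the task at once, while an untrained agent does not.

```python
# Illustrative sketch (not the paper's simulation) of prior experience in
# reinforcement learning: a tabular Q-learner that inherits the value
# function from earlier training reaches the goal far faster than one
# acting greedily on an empty value table.
import random

N_STATES, GOAL = 6, 5   # a small corridor: states 0..5, goal at the right end

def run_episode(q, epsilon, alpha=0.5, gamma=0.9):
    """One Q-learning episode; returns the number of steps taken (capped)."""
    state, steps = 0, 0
    while state != GOAL and steps < 100:
        if random.random() < epsilon:
            action = random.choice((-1, 1))                      # explore
        else:
            action = max((-1, 1), key=lambda a: q[(state, a)])   # exploit
        nxt = min(max(state + action, 0), GOAL)
        reward = 1.0 if nxt == GOAL else 0.0
        best_next = max(q[(nxt, -1)], q[(nxt, 1)])
        q[(state, action)] += alpha * (reward + gamma * best_next
                                       - q[(state, action)])
        state, steps = nxt, steps + 1
    return steps

random.seed(1)
q_experienced = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, 1)}
for _ in range(100):                          # "prior experience" phase:
    run_episode(q_experienced, epsilon=1.0)   # pure exploration, values learned
q_naive = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, 1)}

fast = run_episode(q_experienced, epsilon=0.0)   # exploits learned values
slow = run_episode(q_naive, epsilon=0.0)         # no values to exploit yet
print(fast, slow)
```

The jump from slow trial-and-error to immediate solution once values are in place is the machine-learning analogue of the "sudden" improvement the paper compares with insight.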

    I Was Blind but Now I See: Implementing Vision-Enabled Dialogue in Social Robots

    In the rapidly evolving landscape of human-computer interaction, the integration of vision capabilities into conversational agents stands as a crucial advancement. This paper presents an initial implementation of a dialogue manager that leverages the latest progress in Large Language Models (e.g., GPT-4, IDEFICS) to enhance traditional text-based prompts with real-time visual input. LLMs are used to interpret both textual prompts and visual stimuli, creating a more contextually aware conversational agent. The system's prompt engineering, which incorporates the dialogue together with summaries of the images, ensures a balance between context preservation and computational efficiency. Six interactions with a Furhat robot powered by this system are reported, illustrating and discussing the results obtained. By implementing this vision-enabled dialogue system, the paper envisions a future where conversational agents seamlessly blend textual and visual modalities, enabling richer, more context-aware dialogues. Comment: 8 pages, 3 figures.
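    The prompt strategy described above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: `summarise_image` below is a hypothetical stand-in for a call to a vision-capable LLM, and the roles and wording are invented. The key idea is that the history stores short text summaries of images rather than raw images, preserving visual context while bounding prompt size.

```python
# Illustrative sketch of a dialogue history that keeps image *summaries*
# instead of raw images. `summarise_image` is a hypothetical placeholder
# for a vision-language model call.
def summarise_image(image_id):
    return f"[image {image_id}: a person waves at the robot]"

class DialogueManager:
    def __init__(self, system_prompt):
        self.history = [("system", system_prompt)]

    def add_user_turn(self, text, image_id=None):
        if image_id is not None:
            # Store the summary, not the raw image, to keep prompts small.
            self.history.append(("vision", summarise_image(image_id)))
        self.history.append(("user", text))

    def build_prompt(self):
        # Flatten the mixed text/vision history into one LLM prompt.
        return "\n".join(f"{role}: {content}" for role, content in self.history)

dm = DialogueManager("You are a social robot with a camera.")
dm.add_user_turn("What can you see?", image_id=1)
print(dm.build_prompt())
```

Summarising each image once and reusing the text thereafter is what trades a little visual detail for a prompt that stays short across many turns.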

    The robot who tried too hard: social behaviour of a robot tutor can negatively affect child learning

    Social robots are finding increasing application in the domain of education, particularly for children, to support and augment learning opportunities. With an implicit assumption that social and adaptive behaviour is desirable, it is therefore of interest to determine precisely how these aspects of behaviour may be exploited in robots to support children in their learning. In this paper, we explore this issue by evaluating the effect of a social robot tutoring strategy with children learning about prime numbers. It is shown that the tutoring strategy itself leads to improvement, but that the presence of a robot employing this strategy amplifies this effect, resulting in significant learning. However, it was also found that children interacting with a robot using social and adaptive behaviours in addition to the teaching strategy did not learn a significant amount. These results indicate that while the presence of a physical robot leads to improved learning, caution is required when applying social behaviour to a robot in a tutoring context.

    Nonverbal immediacy as a characterisation of social behaviour for human-robot interaction

    An increasing amount of research has started to explore the impact of robot social behaviour on the outcome of a goal for a human interaction partner, such as cognitive learning gains. However, it remains unclear from what principles the social behaviour for such robots should be derived. Human models are often used, but in this paper an alternative approach is proposed. First, the concept of nonverbal immediacy from the communication literature is introduced, with a focus on how it can provide a characterisation of social behaviour, and the subsequent outcomes of such behaviour. A literature review is conducted to explore the impact on learning of the social cues which form the nonverbal immediacy measure. This leads to the production of a series of guidelines for social robot behaviour. The resulting behaviour is evaluated in a more general context, where both children and adults judge the immediacy of humans and robots in a similar manner, and their recall of a short story is tested. Children recall more of the story when the robot is more immediate, which demonstrates an effect predicted by the literature. This study provides validation for the application of nonverbal immediacy to child-robot interaction. It is proposed that nonverbal immediacy measures could be used as a means of characterising robot social behaviour for human-robot interaction.

    Towards a full spectrum diagnosis of autistic behaviours using human robot interactions

    Autism Spectrum Disorder (ASD) is conceptualised by the Diagnostic and Statistical Manual of Mental Disorders (DSM-V) [1] as a spectrum, and diagnosis involves scoring behaviours in terms of a severity scale. Whilst the application of automated systems and socially interactive robots to ASD diagnosis would increase objectivity and standardisation, most of the existing systems classify behaviours in a binary fashion (ASD vs. non-ASD). To be useful in interventions, and to overcome ethical concerns regarding overly simplified diagnostic measures, a robot therefore needs to be able to classify target behaviours along a continuum, rather than in discrete groups. Here we discuss an approach toward this goal which has the potential to identify the full spectrum of observable ASD traits.